.ASEC(Bibliography)

.ASSEC(Comparison to Other Systems)

One popular way to explicate a system's design ideas is to compare it to other,
similar systems, and/or to others' proposed criteria for such systems. There is
virtually no similar project known to the author, despite an exhaustive search
(see Bibliography). A couple of tangential efforts will be mentioned, followed by
a discussion of how AM will measure up to the standards of understanding set
forth by Moore and Newell in their MERLIN paper.
Next comes a listing of the books which were read, and finally a bibliography
of relevant articles.

Several projects have been undertaken which combine a small piece of the proposed
system with deep concentration on some area ⊗4not⊗* under study here. For example,
Boyer and Moore's theorem-prover embodies some of the spirit of this effort, but its
knowledge base is minimal and its methods purely formal.  Badre's CLET system worked
on learning the decimal addition algorithm
$$ Given the addition table up to 10 + 10,
plus an English text description of what it
means to carry, how and when to carry, etc.*, but the ⊗4mathematics discovery⊗*
aspects of the
system were neither emphasized nor worth emphasizing; it was an interesting natural
language communication study. 
The same comment applies to several related studies 
by IMSSS$$See [Smith], for example.*.
Gelernter has worked on using prototypical examples
as analogic models to guide search in geometry, and Bundy has used "sticks" to help
his program work with natural numbers.  
Kling has studied the single heuristic of analogy, and Brotz has written a
system which uses this to propose useful lemmata; both of these are set up as
theorem provers, again not as discoverers.
One aspect that each of these systems lacked
was size: they all worked in tiny toy domains, with minuscule, carefully prearranged
knowledge bases, with just enough information to do the job well, but not so much that
the system might be swamped. AM is open to all the advantages and all
the dangers of a non-toy system with a massive corpus of data to manage.  The other
systems did not deal with intuition, or indeed with multiple knowledge sources of any
kind (beyond examples or syntactic analogy). 
Certainly none has considered the paradigm of ⊗4discovery and evaluation of
the interestingness of structure⊗*; the others have been "here is your task, try to
prove it,"  or, in Badre's case, "here is the answer, try to translate/use it."

There has been very little thought about discovery in mathematics from an algorithmic
point of view; even clear thinkers like Polya and Poincaré treat mathematical 
ability as a sacred, almost mystic quality, tied to the unconscious.
The writings of philosophers and psychologists invariably attempt to examine
human performance and belief, which are far more manageable than creativity
in vitro.  Belief formulae in inductive logic (e.g., Carnap, Pietarinen) 
invariably fall back upon how well they fit human measurements. The abilities of
a computer and of a brain are too distinct for it to make sense to work blindly
toward making the results (let alone the algorithms!) of one match those of the other.

In an earlier section we discussed criteria for the system.
Two important criteria are final performance and initial starting point:
what is AM given (including the knowledge in the program environment),
and what does it do with that information?  Moore and Newell have published some
reasonable design issues for any proposed understanding system, and we shall now
see how our system answers their questions$$
Each point of the taxonomy which they
provide before these questions is covered by the proposed system.*.

.BEGIN W(6) 

Representation: Families of BEINGs, simple situation/rules, opaque functions.
	Scope: Each family of BEINGs characterizes one type of knowledge. 
			Each BEING represents one very specialized expert.
			The opaque functions can represent intuition and the real world.
	Grain: Partial knowledge about a topic X is naturally expressed as an incomplete BEING X.
	Multiple representations: Each differently-named part has its own format, so, e.g.,
		examples of an operation can be stored as i/o pairs, the intuition points to an
		opaque function, the recognition section is sit/action productions, and the
		algorithms part is a quasi-executable partially-ordered list of things to try
		(an illustrative sketch of such a record appears after this list).
Action: Most knowledge is stored in BEING-parts in a nearly-executable way; the remainder is
	stored so that the "active" segment can easily use it as it runs.  The place that
	a piece of information is stored is carefully chosen so that it will be evoked
	in almost all the situations in which it is relevant.  The only real action in the
	system is the selective completion of BEINGs' parts (occasionally creating a new BEING).
Assimilation: There is no sharp distinction between the internal knowledge and the
	task; the task is really nothing more than to extend the given knowledge while
	maintaining interest and aesthetic worth.  The only external entities are the
	user and the simulated physical world. Contact with the former is through a
	simpleminded translation scheme, with the latter through evaluation of opaque
	functions on observable data and examination of the results.
Accommodation: translation of alien messages; inference from (simulated) real-world example data.
Directionality: The Environment gathers up the relevant knowledge at each step to fill
	in the currently worked-on part of the current BEING, simply by asking that part
	(its archetypical representative), that BEING, and its Tied BEINGs what to do.
	Keep-progressing: at each stage, there will be hundreds or thousands of unfilled-in
		parts, and the system simply chooses the most interesting one to work on
		(a sketch of this control loop appears after this list).
Efficiency: 
	Interpreter: Will the contents of BEINGs' parts be compilable, or must they remain
		completely inspectable? One alternative is to provide two versions: a
		fast one for execution and a transparent one for examination. 
		Also provide access to a compiler, to recompile any changed (or new) part.
	Immediacy: There need not be close, rapid-fire communication with a human,
		but whenever communicating with him, time ⊗4will⊗* be important; thus the
		only requirement on speed is placed upon the translation modules, and
		they are fairly simple (due to the clean nature of the mathematical domain).
	Formality: There is a probabilistic belief rating for everything, and a descriptive
		"Justifications" component for all BEINGs for which it is meaningful.
		There are experts who know about Bugs, Debugging, Contradiction, etc.
		Frame problem: when the world changes, make no effort to update everything.
			Whenever a contradiction is encountered, study its origins and
			recompute belief values until it goes away.
Depth of Understanding:  Each BEING is an expert, one of whose duties is to announce his
	own relevance whenever he recognizes it. The specific desire will generally
	indicate which part of the relevant BEING is the one to examine. If this fails,
	each BEING has a part which (on the basis of how it failed) points to alternatives.
	Access to all implications: The intuitive functions must simulate this ability,
		since they are to be analogic. The BEINGs certainly don't have such access.

.END
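
To make the "Multiple representations" entry above concrete, here is a minimal
sketch, given in present-day Python purely as illustration and not as AM's actual
code, of how a single BEING might bundle parts of quite different formats. Every
identifier below is an assumption introduced for this sketch, not AM's actual
vocabulary.

.BEGIN W(6)

from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Being:
    """One specialized expert: a bundle of differently-formatted parts."""
    name: str
    examples: list = field(default_factory=list)        # i/o pairs, e.g. ((2, 3), 5)
    recognition: list = field(default_factory=list)     # situation/action productions
    algorithms: list = field(default_factory=list)      # partially-ordered things to try
    intuition: Optional[Callable] = None                # pointer to an opaque function
    justifications: list = field(default_factory=list)  # descriptive support for beliefs
    belief: float = 0.5                                 # probabilistic belief rating
    ties: list = field(default_factory=list)            # names of Tied BEINGs

    def unfilled_parts(self):
        """Partial knowledge about a topic is simply an incomplete BEING."""
        return [p for p in ("examples", "recognition", "algorithms", "intuition")
                if not getattr(self, p)]

.END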
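
The Directionality and Keep-progressing entries above describe an agenda-like
control loop over the unfilled parts of all BEINGs. The sketch below, under the
same illustrative assumptions, shows one way such a loop could be organized;
interest_of, archetype_advice, and fill_part are hypothetical stand-ins for
machinery the list only names.

.BEGIN W(6)

def keep_progressing(beings, interest_of, archetype_advice, fill_part, steps=100):
    """Repeatedly work on the most interesting unfilled part of any BEING."""
    for _ in range(steps):
        # Every (BEING, part-name) pair still unfilled is a candidate task.
        agenda = [(b, part) for b in beings.values() for part in b.unfilled_parts()]
        if not agenda:
            break
        # Choose the most interesting task; interest_of stands in for the
        # system's interestingness and aesthetic-worth judgments.
        being, part = max(agenda, key=lambda task: interest_of(*task))
        # Gather relevant knowledge by asking the part's archetypical
        # representative, the BEING itself, and its Tied BEINGs.
        advisors = [archetype_advice(part), being] + \
                   [beings[t] for t in being.ties if t in beings]
        # Filling in the part may occasionally create a brand-new BEING.
        new_being = fill_part(being, part, advisors)
        if new_being is not None:
            beings[new_being.name] = new_being

.END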